1·A large buffer will not hit this limit as frequently as a smaller buffer.
2·The basic idea is to give each pipeline its own buffer to process, and then to process each buffer one stage at a time.
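A minimal sketch of that idea, with hypothetical names (stage_fn, run_pipeline are illustrative, not from any particular library): each pipeline owns one buffer, and its stages are applied to that buffer one at a time.

    #include <stddef.h>

    /* Illustrative only: each pipeline owns a single buffer, and the
       pipeline's stages are applied to that buffer one stage at a time. */
    typedef void (*stage_fn)(unsigned char *buf, size_t len);

    static void run_pipeline(unsigned char *buf, size_t len,
                             const stage_fn *stages, size_t nstages)
    {
        for (size_t i = 0; i < nstages; i++)
            stages[i](buf, len);   /* finish one stage over the whole buffer, then move to the next */
    }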
3·This recovery process immediately begins to prefetch dirty pages from the centralized buffer pool into its own local buffer pool.
4·Transactions that require disk I/O, such as flushing dirty pages from the buffer pool or flushing logs from the log buffer, may have to wait.
5·The simplest usage is to have the SPE program take two pointers: one to an input buffer and one to an output buffer.
6·This finding demonstrates the effect of concurrent transactions on the buffer pool, and how changes in the buffer pool affect server performance.
7·The SPE program reads the data in the input buffer, processes it, and then writes the result to the output buffer.
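Sentences 5 and 7 describe the same read-process-write pattern. A minimal sketch in plain C, assuming a hypothetical name (spe_main) and a trivial doubling step; a real SPE program would additionally use DMA transfers to move the buffers between main memory and the SPE's local store.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative read-process-write loop: read from the input buffer,
       process the data, and write the result to the output buffer. */
    static void spe_main(const int16_t *in, int16_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * 2;
    }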
8·If necessary, you can separate data and indexes into two different buffer pools to help ensure a good index buffer pool hit ratio.
9·From here, depending on the architecture, a call is made to copy from the user buffer to a kernel buffer, zeroing any unavailable bytes.
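A user-space illustration of the "copy with zeroing" behavior described in sentence 9 (copy_with_zeroing is a hypothetical name, not the kernel's actual routine): the bytes that can be obtained from the source buffer are copied, and the remainder of the destination buffer is zero-filled.

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical illustration: copy the available bytes from the source
       buffer and zero the unavailable remainder of the destination buffer. */
    static void copy_with_zeroing(void *dst, const void *src,
                                  size_t available, size_t total)
    {
        if (available > total)
            available = total;
        memcpy(dst, src, available);
        memset((char *)dst + available, 0, total - available);
    }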
10·If your system has a low buffer pool hit ratio, you can increase the buffer pool size further to achieve better application performance.
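For reference, the buffer pool hit ratio mentioned in sentences 8 and 10 is commonly computed as (logical reads - physical reads) / logical reads. A small sketch, with illustrative variable names:

    /* Common hit-ratio formula; the parameter names are illustrative and not
       taken from any particular monitoring tool's output. */
    static double buffer_pool_hit_ratio(double logical_reads, double physical_reads)
    {
        if (logical_reads <= 0.0)
            return 0.0;
        return (logical_reads - physical_reads) / logical_reads;
    }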